In calculus, the product rule (also called Leibniz's law; see the discussion of derivations below) is a formula used to find the derivatives of products of functions. It may be stated thus:

$(f\cdot g)' = f'\cdot g + f\cdot g',$

or in the Leibniz notation thus:

$\dfrac{d}{dx}(u\cdot v) = u\cdot\dfrac{dv}{dx} + v\cdot\dfrac{du}{dx}.$
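For example, taking $u(x) = x^2$ and $v(x) = \sin x$ (an arbitrary pair chosen only for illustration), the rule gives

$\dfrac{d}{dx}\left(x^2\sin x\right) = 2x\,\sin x + x^2\cos x.$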
Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using differentials. Here is Leibniz's argument: Let u(x) and v(x) be two differentiable functions of x. Then the differential of uv is

$d(u\cdot v) = (u + du)\cdot(v + dv) - u\cdot v = u\cdot dv + v\cdot du + du\cdot dv.$

Since the term du·dv is "negligible" (compared to du and dv), Leibniz concluded that

$d(u\cdot v) = v\cdot du + u\cdot dv,$

and this is indeed the differential form of the product rule. If we divide through by the differential dx, we obtain

$\dfrac{d}{dx}(u\cdot v) = v\cdot\dfrac{du}{dx} + u\cdot\dfrac{dv}{dx},$

which can also be written in "prime notation" as

$(u\cdot v)' = v\cdot u' + u\cdot v'.$
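As a quick check (with u = v = x, chosen only for illustration), $(x + dx)^2 - x^2 = 2x\,dx + (dx)^2$; discarding the second-order term $(dx)^2$ leaves $d(x^2) = 2x\,dx$, in agreement with the rule.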
It is a common error, when studying calculus, to suppose that the derivative of (uv) equals (u′)(v′) (Leibniz himself made this error initially);[1] however, there are clear counterexamples to this. Any differentiable function ƒ(x) can be written as ƒ(x) · 1, since 1 is the identity element for multiplication. If the above-mentioned misconception were true, the derivative of ƒ(x) · 1 would be ƒ′(x) · 0 = 0, because the derivative of a constant (such as 1) is zero; every derivative would then have to vanish, which is clearly false in general.
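A concrete counterexample makes the same point: with $u(x) = x^2$ and $v(x) = x^3$ (chosen only for illustration), the product is $u(x)v(x) = x^5$, whose derivative is $5x^4$, whereas $u'(x)v'(x) = 2x\cdot 3x^2 = 6x^3$, a different function.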
A rigorous proof of the product rule can be given using the properties of limits and the definition of the derivative as a limit of Newton's difference quotient.
If

$h(x) = f(x)\,g(x),$

and ƒ and g are each differentiable at the fixed number x, then

$h'(x) = \lim_{w\to x}\dfrac{f(w)g(w) - f(x)g(x)}{w - x}. \qquad (1)$
Now the difference

$f(w)g(w) - f(x)g(x) \qquad (2)$

is the area of the big rectangle minus the area of the small rectangle in the illustration (the two products pictured as areas of rectangles with side lengths ƒ(w), g(w) and ƒ(x), g(x) respectively).

The region between the smaller and larger rectangle can be split into two rectangles, the sum of whose areas is[2]

$f(x)\,[\,g(w) - g(x)\,] + g(w)\,[\,f(w) - f(x)\,]. \qquad (3)$

Therefore the expression in (1) is equal to

$\lim_{w\to x}\left( f(x)\,\dfrac{g(w) - g(x)}{w - x} + g(w)\,\dfrac{f(w) - f(x)}{w - x} \right). \qquad (4)$
Assuming that all limits used exist, (4) is equal to

$\left(\lim_{w\to x} f(x)\right)\!\left(\lim_{w\to x}\dfrac{g(w) - g(x)}{w - x}\right) + \left(\lim_{w\to x} g(w)\right)\!\left(\lim_{w\to x}\dfrac{f(w) - f(x)}{w - x}\right). \qquad (5)$

Now

$\lim_{w\to x} f(x) = f(x)$

because ƒ(x) remains constant as w → x;

$\lim_{w\to x}\dfrac{g(w) - g(x)}{w - x} = g'(x)$

because g is differentiable at x;

$\lim_{w\to x}\dfrac{f(w) - f(x)}{w - x} = f'(x)$

because ƒ is differentiable at x;

and now the "hard" one:

$\lim_{w\to x} g(w) = g(x)$

because g, being differentiable, is continuous at x.

We conclude that the expression in (5) is equal to

$f(x)\,g'(x) + g(x)\,f'(x).$
Suppose

$f(x) = u(x)\,v(x),$

where u and v are differentiable at x. By applying Newton's difference quotient and the limit as h approaches 0, we are able to represent the derivative in the form

$f'(x) = \lim_{h\to 0}\dfrac{u(x+h)\,v(x+h) - u(x)\,v(x)}{h}.$

In order to simplify this limit we add and subtract the term $u(x+h)\,v(x)$ in the numerator, keeping the fraction's value unchanged:

$f'(x) = \lim_{h\to 0}\dfrac{u(x+h)\,v(x+h) - u(x+h)\,v(x) + u(x+h)\,v(x) - u(x)\,v(x)}{h}.$

This allows us to factorise the numerator like so:

$f'(x) = \lim_{h\to 0}\dfrac{u(x+h)\,[\,v(x+h) - v(x)\,] + v(x)\,[\,u(x+h) - u(x)\,]}{h}.$

The fraction is split into two:

$f'(x) = \lim_{h\to 0}\left( u(x+h)\,\dfrac{v(x+h) - v(x)}{h} + v(x)\,\dfrac{u(x+h) - u(x)}{h} \right).$

The limit is applied to each term and factor of the limit expression:

$f'(x) = \left(\lim_{h\to 0} u(x+h)\right)\!\left(\lim_{h\to 0}\dfrac{v(x+h) - v(x)}{h}\right) + \left(\lim_{h\to 0} v(x)\right)\!\left(\lim_{h\to 0}\dfrac{u(x+h) - u(x)}{h}\right).$

Each limit is evaluated. Taking into consideration the definition of the derivative (and the continuity of u at x, which gives $\lim_{h\to 0} u(x+h) = u(x)$), the result is

$f'(x) = u(x)\,v'(x) + v(x)\,u'(x).$
Let f = uv and suppose u and v are positive functions of x. Then

$\ln f = \ln(u\cdot v) = \ln u + \ln v.$

Differentiating both sides:

$\dfrac{1}{f}\,\dfrac{df}{dx} = \dfrac{1}{u}\,\dfrac{du}{dx} + \dfrac{1}{v}\,\dfrac{dv}{dx},$

and so, multiplying the left side by f, and the right side by uv (the two are equal),

$\dfrac{df}{dx} = v\,\dfrac{du}{dx} + u\,\dfrac{dv}{dx}.$

The proof appears in [1]. Note that, since u and v must be continuous, the assumption of positivity does not diminish the generality.
This proof relies on the chain rule and on the properties of the natural logarithm function, both of which are deeper than the product rule. From one point of view, that is a disadvantage of this proof. On the other hand, the simplicity of the algebra in this proof perhaps makes it easier to understand than a proof using the definition of differentiation directly.
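The same manipulation, applied directly to a concrete product, is the familiar technique of logarithmic differentiation. For instance, for $f(x) = x^3 e^x$ on an interval where $x > 0$ (an arbitrary example), $\ln f = 3\ln x + x$, so $f'/f = 3/x + 1$ and $f' = x^3 e^x\,(3/x + 1) = 3x^2 e^x + x^3 e^x$, in agreement with the product rule.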
The product rule can be considered a special case of the chain rule for several variables.
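To see this, one can regard uv as the composition of $x \mapsto (a, b) = (u(x), v(x))$ with the multiplication map $(a, b) \mapsto ab$; applying the chain rule for several variables to that composition gives

$\dfrac{d(ab)}{dx} = \dfrac{\partial(ab)}{\partial a}\,\dfrac{da}{dx} + \dfrac{\partial(ab)}{\partial b}\,\dfrac{db}{dx} = b\,\dfrac{da}{dx} + a\,\dfrac{db}{dx}.$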
Let u and v be continuous functions in x, and let dx, du and dv be infinitesimals in the sense of non-standard analysis, with st denoting the standard part function. This gives

$\dfrac{d(uv)}{dx} = \operatorname{st}\!\left(\dfrac{(u + du)(v + dv) - uv}{dx}\right) = \operatorname{st}\!\left(\dfrac{u\,dv + v\,du + du\,dv}{dx}\right) = \operatorname{st}\!\left(u\,\dfrac{dv}{dx} + v\,\dfrac{du}{dx} + du\,\dfrac{dv}{dx}\right) = u\,\dfrac{dv}{dx} + v\,\dfrac{du}{dx}.$
The product rule can be generalized to products of more than two factors. For example, for three factors we have

$\dfrac{d(uvw)}{dx} = \dfrac{du}{dx}\,vw + u\,\dfrac{dv}{dx}\,w + uv\,\dfrac{dw}{dx}.$

For a collection of functions $f_1, \dots, f_k$, we have

$\dfrac{d}{dx}\left[\prod_{i=1}^k f_i(x)\right] = \sum_{i=1}^k \left( \dfrac{d}{dx} f_i(x) \prod_{j\ne i} f_j(x) \right).$
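For instance, with the three factors $u(x) = x$, $v(x) = \sin x$ and $w(x) = e^x$ (chosen only for illustration),

$\dfrac{d}{dx}\left(x\,\sin x\,e^x\right) = \sin x\,e^x + x\,\cos x\,e^x + x\,\sin x\,e^x.$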
It can also be generalized to the Leibniz rule for the nth derivative of a product of two factors:

$(uv)^{(n)} = \sum_{k=0}^{n} \binom{n}{k}\, u^{(k)}\, v^{(n-k)}.$
See also binomial coefficient and the formally quite similar binomial theorem. See also Leibniz rule (generalized product rule).
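For instance, taking n = 2 gives the second derivative of a product:

$(uv)'' = u''v + 2u'v' + uv''.$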
For partial derivatives, we have

$\dfrac{\partial^n}{\partial x_1\cdots\partial x_n}(uv) = \sum_{S} \dfrac{\partial^{|S|} u}{\prod_{i\in S}\partial x_i}\cdot\dfrac{\partial^{n-|S|} v}{\prod_{i\notin S}\partial x_i},$

where the index S runs through the whole list of 2^n subsets of {1, ..., n}. If this seems hard to understand, consider the case in which n = 3:

$\dfrac{\partial^3}{\partial x_1\,\partial x_2\,\partial x_3}(uv) = u\cdot\dfrac{\partial^3 v}{\partial x_1\,\partial x_2\,\partial x_3} + \dfrac{\partial u}{\partial x_1}\cdot\dfrac{\partial^2 v}{\partial x_2\,\partial x_3} + \dfrac{\partial u}{\partial x_2}\cdot\dfrac{\partial^2 v}{\partial x_1\,\partial x_3} + \dfrac{\partial u}{\partial x_3}\cdot\dfrac{\partial^2 v}{\partial x_1\,\partial x_2} + \dfrac{\partial^2 u}{\partial x_1\,\partial x_2}\cdot\dfrac{\partial v}{\partial x_3} + \dfrac{\partial^2 u}{\partial x_1\,\partial x_3}\cdot\dfrac{\partial v}{\partial x_2} + \dfrac{\partial^2 u}{\partial x_2\,\partial x_3}\cdot\dfrac{\partial v}{\partial x_1} + \dfrac{\partial^3 u}{\partial x_1\,\partial x_2\,\partial x_3}\cdot v.$
Suppose X, Y, and Z are Banach spaces (which includes Euclidean space) and B : X × Y → Z is a continuous bilinear operator. Then B is differentiable, and its derivative at the point (x,y) in X × Y is the linear map D(x,y)B : X × Y → Z given by

$\bigl(D_{(x,y)}B\bigr)(u,v) = B(u,y) + B(x,v) \qquad \text{for all } (u,v)\in X\times Y.$
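As an illustrative special case, take X = Y = Z = ℝ and B(x, y) = xy; then, along a differentiable curve $t \mapsto (x(t), y(t))$, the chain rule applied to $t \mapsto B(x(t),y(t))$ recovers the ordinary product rule:

$\dfrac{d}{dt}\,B\bigl(x(t),y(t)\bigr) = B\bigl(x'(t), y(t)\bigr) + B\bigl(x(t), y'(t)\bigr) = x'(t)\,y(t) + x(t)\,y'(t).$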
In abstract algebra, the product rule is used to define what is called a derivation, not vice versa.
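Explicitly, a derivation on an algebra A is a linear map D : A → A satisfying the Leibniz law

$D(ab) = D(a)\,b + a\,D(b) \qquad \text{for all } a, b \in A;$

ordinary differentiation on the algebra of differentiable functions is the motivating example.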
The product rule extends to scalar multiplication, dot products, and cross products of vector functions.
For scalar multiplication:

$(f\,\mathbf{g})' = f'\,\mathbf{g} + f\,\mathbf{g}'$

For dot products:

$(\mathbf{f}\cdot\mathbf{g})' = \mathbf{f}'\cdot\mathbf{g} + \mathbf{f}\cdot\mathbf{g}'$

For cross products:

$(\mathbf{f}\times\mathbf{g})' = \mathbf{f}'\times\mathbf{g} + \mathbf{f}\times\mathbf{g}'$

(Beware: since cross products are not commutative, it is not correct to write $(\mathbf{f}\times\mathbf{g})' = \mathbf{f}'\times\mathbf{g} + \mathbf{g}'\times\mathbf{f}$. But cross products are anticommutative, so it can be written as $(\mathbf{f}\times\mathbf{g})' = \mathbf{f}'\times\mathbf{g} - \mathbf{g}'\times\mathbf{f}$.)

For scalar fields the concept of gradient is the analog of the derivative:

$\nabla(f\cdot g) = (\nabla f)\cdot g + f\cdot(\nabla g)$
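For instance, for the scalar fields f(x, y) = x and g(x, y) = y (chosen only for illustration), $\nabla(fg) = \nabla(xy) = (y,\,x)$, while $(\nabla f)\,g + f\,(\nabla g) = (1,\,0)\,y + x\,(0,\,1) = (y,\,x)$, as expected.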
Among the applications of the product rule is a proof that

$\dfrac{d}{dx}\,x^n = n\,x^{n-1}$

when n is a positive integer (this rule is true even if n is not positive or is not an integer, but the proof of that must rely on other methods). The proof is by mathematical induction on the exponent n. If n = 0 then $x^n$ is constant and $n\,x^{n-1} = 0$. The rule holds in that case because the derivative of a constant function is 0. If the rule holds for any particular exponent n, then for the next value, n + 1, we have

$\dfrac{d}{dx}\,x^{n+1} = \dfrac{d}{dx}\left(x^n\cdot x\right) = x^n\,\dfrac{d}{dx}\,x + x\,\dfrac{d}{dx}\,x^n = x^n\cdot 1 + x\cdot n\,x^{n-1} = (n+1)\,x^n.$
Therefore if the proposition is true of n, it is true also of n + 1.